
Gary Marcus


The Download: hunting for new matter, and Gary Marcus' AI critiques

MIT Technology Review

In 2012, using data from CERN's Large Hadron Collider, researchers discovered a particle called the Higgs boson. In the process, they answered a nagging question: Where do fundamental particles, such as the ones that make up all the protons and neutrons in our bodies, get their mass? When the particle was finally found, scientists celebrated with champagne. A Nobel for two of the physicists who predicted the Higgs boson soon followed. But now, more than a decade later, there is a sense of unease.


I went for a walk with Gary Marcus, AI's loudest critic

MIT Technology Review

Marcus, a professor emeritus at NYU, is a prominent AI researcher and cognitive scientist who has positioned himself as a vocal critic of deep learning and AI. He is a divisive figure. You might recognize him from the spicy feuds on X with AI heavyweights such as Yann LeCun and Geoffrey Hinton. It is on walks like this that Marcus often does most of his tweeting. This week has been a big news week in AI.


AI's 'Fog of War'

The Atlantic - Technology

This is Atlantic Intelligence, an eight-week series in which The Atlantic's leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Earlier this year, The Atlantic published a story by Gary Marcus, a well-known AI expert who has agitated for the technology to be regulated, both in his Substack newsletter and before the Senate. Marcus argued that "this is a moment of immense peril," and that we are teetering toward an "information-sphere disaster, in which bad actors weaponize large language models, distributing their ill-gotten gains through armies of ever more sophisticated bots." I was interested in following up with Marcus given recent events. In the past six weeks, we've seen an executive order from the Biden administration focused on AI oversight; chaos at the influential company OpenAI; and this Wednesday, the release of Gemini, a GPT competitor from Google.


Controlling AI

Communications of the ACM

Gary Marcus: Two Models of AI Oversight -- and How Things Could Go Deeply Wrong (https://bit.ly/3Qnxd9A), June 12, 2023. Originally published on The Road to AI We Can Trust (http://bit.ly/3juuD3j). At the Senate hearing that I participated in a few weeks ago (https://bit.ly/44QxHt1), I was thrilled by what I saw of the Senate that day: genuine interest and genuine humility. Senators acknowledged that they were too slow to figure out what to do about social media, that the needed moves were not made then, and that there was now a sense of urgency.


Congress Really Wants to Regulate A.I., But No One Seems to Know How

The New Yorker

In February, 2019, OpenAI, a little-known artificial-intelligence company, announced that its large-language-model text generator, GPT-2, would not be released to the public "due to our concerns about malicious applications of the technology." Among the dangers, the company stated, was a potential for misleading news articles, online impersonation, and automating the production of abusive or faked social-media content and of spam and phishing content. As a consequence, OpenAI proposed that "governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems." This week, four years after that warning, members of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law met to discuss "Oversight of A.I.: Rules for Artificial Intelligence." As has been the case with other tech hearings on the Hill, this one came after a new technology with the capacity to fundamentally alter our social and political lives was already in circulation. Like many Americans, the lawmakers became concerned about the pitfalls of large-language-model artificial intelligence in March, when OpenAI released GPT-4, the latest and most polished iteration of its text generator.


Senate warned of 'perfect storm' leading to emerging AI disaster: 'Democracy itself is threatened'

FOX News

Senators on Tuesday got the green light to impose significant federal regulation on artificial intelligence systems, not just from two industry giants, but from an AI expert who warned that the fate of the nation may depend on tough AI rules from Congress. A Senate Judiciary subcommittee heard from OpenAI CEO Sam Altman and IBM Chief Privacy & Trust Officer Christina Montgomery, who both invited federal oversight of AI even though they split on whether a new federal agency is needed. In between those witnesses sat Gary Marcus, the New York University professor emeritus and leader of Uber's AI labs from 2016 to 2017, who issued a stark warning that human life is about to be upended by this unpredictable technology. "They can and will create persuasive lies at a scale humanity has never seen before," Marcus warned of generative AI systems. "Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems." Marcus warned that AI systems that do severe damage to humans' trust in each other have already been released and that the damage is already mounting. Gary Marcus, professor emeritus at New York University, speaks during a Senate Judiciary subcommittee hearing in Washington, D.C., on Tuesday, May 16, 2023. "A law professor, for example, was accused by a chatbot of sexual harassment."


AI expert taps UN officials to learn how to build a global AI regulatory body

FOX News

Another challenge: Forming an AI regulatory body on a global scale would require significant funding. "We need money," he said. "We need some philanthropists probably to get us started." "It's still a very long road," Marcus told Fox News. "It's a big ask, but I think the time for it is right."


Does AI need a UN? Expert calls for global governing body to police 'billions of pieces of misinformation'

FOX News

Cognitive scientist and AI expert Gary Marcus advocates for the formation of an international body to govern emerging artificial intelligence technologies. The world needs a United Nations-like agency to regulate rapidly advancing artificial intelligence technology, particularly since governments are starting to pass laws that put varying demands on AI companies. "Right now, we have 37 countries that passed laws about artificial intelligence last year, each of them doing their own thing," said Gary Marcus, who hosts the AI-themed podcast, "Humans vs Machines with Gary Marcus." "But there's no coordination between what all of these countries are doing." Without a shared regulatory body, AI companies might be forced to modify their software and offer different versions from country to country -- or even state to state -- to comply with each unique law, according to Marcus.


Shattering reality: Is AI-generated content already good enough to fool the average person?

FOX News

An AI expert said AI technology is already capable of producing content so realistic that some people might be convinced that it's genuine. A world where AI-generated videos and images can dupe the public on a large scale -- a fear of the "Godfather of AI" -- has become a reality, according to an artificial intelligence writer and podcast host. "That moment is already here," said cognitive scientist Gary Marcus, who hosts the AI-centric podcast, "Humans vs Machines with Gary Marcus." "The techniques will only get better and better over the coming years, but they're already good enough that they can probably fool at least some of the people some of the time." Computer scientist Geoffrey Hinton, who is widely considered the "Godfather of AI" and helped develop systems used in software like ChatGPT, recently told The New York Times he feared AI-generated photos, videos and text will soon flood the internet. The average person, as a result, will "not be able to know what is true anymore," he said.


Gary Marcus Used to Call AI Stupid--Now He Calls It Dangerous

WIRED

As a journalist, one thing I appreciate about Gary Marcus is that he always makes time for a chat. The last time we met face-to-face was late last year in New York City, where he fit me in between a series of press interviews, including NPR, CNN, the BBC, and the Big Kahuna, a taping of 60 Minutes with Lesley Stahl. When I called Marcus this week for an update on his Never Ending Tour to critique AI, he made sure to Zoom with me the next day, tweaking his schedule to avoid conflict with a Morning Joe hit. It was a good day for Marcus: the New York Times Sunday Magazine had just gone online with a lengthy Marcus interview conducted by its talk maven, David Marchese, whose previous subjects have included Thomas Piketty, Tom Stoppard, and Iggy Pop. The success of large language models like OpenAI's ChatGPT, Google's Bard, and a host of others has been so spectacular that it's literally scary.